
Search in the Catalogues and Directories

Hits 1–12 of 12

1. LogCLEF 2010: . . . Logfile Analysis Track Overview
In: http://clef2010.org/resources/proceedings/LogCLEF-2010-Overview.pdf (2010)
Source: BASE

2. LogCLEF 2009: The CLEF 2009 Multilingual Logfile Analysis Track Overview
In: http://www.clef-campaign.org/2009/working_notes/LogCLEF-2009-Overview-Working-Notes-2009-09-14.pdf (2009)
Source: BASE

3. GeoCLEF 2008: The CLEF 2008 Cross-Language Geographic Information Retrieval Track Overview
In: http://www.clef-campaign.org/2008/working_notes/GeoCLEF-2008-overview-notebook-paperWNfinal.pdf (2008)
Source: BASE

4. Recent Developments in the Evaluation of Information Retrieval Systems: Moving Towards Diversity and Practical Relevance
In: http://www.informatica.si/PDF/32-1/12_Mandl%20-%20Recent%20Developments%20in%20the%20Evaluation%20of.pdf (2007)
Source: BASE

5. Proper Names in the Multilingual CLEF Topic Set
In: http://clef.iei.pi.cnr.it:2002/2003/WN_web/53.pdf (2003)
Source: BASE

6. Linguistic and Statistical Analysis of the CLEF Topics
In: http://clef.iei.pi.cnr.it:2002/workshop2002/WN/40.pdf (2002)
Source: BASE

7. Vague Transformations in Information Retrieval
In: http://www.uni-hildesheim.de/~mandl/home/Isi98_mandl.pdf (1998)
Source: BASE

8. Recent Developments in the Evaluation of Information Retrieval Systems: Moving Toward Diversity and Practical Applications
In: http://www.uni-hildesheim.de/~mandl/Lehre/pkclir/Mandl2005Evaluations.pdf
Source: BASE

9. CLEF 2008: Ad Hoc Track Overview
In: http://www.clef-campaign.org/2008/working_notes/adhoc-final.pdf
Source: BASE

10. Easy Tasks Dominate Information Retrieval Evaluation Results
In: http://subs.emis.de/LNI/Proceedings/Proceedings144/112.pdf
Abstract: The evaluation of information retrieval systems involves the creation of potential user needs (topics) for which systems try to find relevant documents. The difficulty of these topics differs greatly, and final system scores are typically based on the arithmetic mean over all topics. However, topics that are relatively easy to solve have a much larger impact on the final system ranking than hard topics. This paper presents a novel methodology to measure that effect. The results of a large evaluation experiment with 100 topics from the Cross Language Evaluation Forum (CLEF) allow a split of the topics into four groups according to difficulty. The easy topics have a larger impact, especially for multilingual retrieval. Nevertheless, the internal test reliability as measured by Cronbach's Alpha is higher for more difficult topics. We show how alternative, robust measures like the geometric average distribute the effect of the topics more evenly.
URL: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.651.2551
URL: http://subs.emis.de/LNI/Proceedings/Proceedings144/112.pdf
Source: BASE
(A short illustrative sketch of the arithmetic-vs-geometric contrast described in this abstract follows the result list.)

11. GeoCLEF 2008: The CLEF 2008 Cross-Language Geographic Information Retrieval Track Overview
In: http://www.linguateca.pt/Diana/download/MandletalGeoCLEF2008WN.pdf
Source: BASE

12. CLEF 2009 Ad Hoc Track Overview: Robust-WSD Task
In: http://www.clef-campaign.org/2009/working_notes/agirre-robustWSDtask-paperCLEF2009.pdf
Source: BASE
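
The abstract of hit 10 contrasts aggregation by the arithmetic mean with robust measures like the geometric average. Below is a minimal Python sketch of that contrast, using invented per-topic average-precision scores (hypothetical values, not CLEF data and not the paper's own code): under the arithmetic mean the easy topics decide the ranking, while the geometric mean weights a failure on a hard topic far more heavily.

import math

# Hypothetical per-topic average precision scores (invented for illustration):
# three comparatively easy topics and one hard topic per system.
scores = {
    "system_A": [0.80, 0.75, 0.70, 0.02],  # strong on easy topics, fails the hard one
    "system_B": [0.60, 0.55, 0.50, 0.20],  # weaker on easy topics, robust on the hard one
}

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs, eps=1e-5):
    # The epsilon keeps near-zero scores from collapsing the product to zero,
    # a common convention for GMAP-style robust measures.
    return math.exp(sum(math.log(x + eps) for x in xs) / len(xs)) - eps

for name, xs in scores.items():
    print(f"{name}: mean={arithmetic_mean(xs):.3f}  gmean={geometric_mean(xs):.3f}")

# The arithmetic mean ranks system_A first (about 0.57 vs. 0.46) on the strength
# of the easy topics; the geometric mean reverses the ranking (about 0.30 vs. 0.43)
# because the failure on the hard topic dominates the product.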
